
Creating a Local Backup of Assetic Data (Assetic Python SDK)

This article demonstrates the use of a Python script to automate SQL Server database syncing. The script relies on the bulk export functionality of the Advanced Search feature. For an overview of using Advanced Search to bulk export data, refer to this article on the Integration page.

Local SQL Server database sync using Assetic Python SDK

This process is used to export Assetic data from Assetic to a local SQL Server database.

 

The data to be exported is defined via Search Profiles created via the Advanced Search.

 
Note: The sync process requires the GUID of the Search Profile used to define the export data. To obtain this identifier, open the Advanced Search profile in Assetic and copy the GUID from the URL in the browser address bar.

In the example screenshot below the full URL is:
https://xxx.assetic.net/Search/AdvancedSearchProfile/SPDashboard/Default/ff03c4bd-4f1d-e611-9458-06edd62954d7?loadprofile=True. 
The GUID therefore is: ff03c4bd-4f1d-e611-9458-06edd62954d7 
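The GUID can also be extracted from such a URL programmatically. The sketch below is plain Python with no Assetic dependency, using a regular expression for the standard 8-4-4-4-12 hexadecimal GUID pattern:

```python
import re

# Pattern for a standard GUID: 8-4-4-4-12 hexadecimal characters
GUID_PATTERN = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")

def guid_from_profile_url(url):
    """Return the first GUID found in an Advanced Search profile URL."""
    match = GUID_PATTERN.search(url)
    return match.group(0) if match else None

url = ("https://xxx.assetic.net/Search/AdvancedSearchProfile/SPDashboard/"
       "Default/ff03c4bd-4f1d-e611-9458-06edd62954d7?loadprofile=True")
print(guid_from_profile_url(url))
```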

 
Note: It is recommended to include the 'Last Modified Date' field of the search records in the search profile. This enables the export to be skipped if there have been no changes to the data.

 

The Assetic Python sync process wraps the Assetic REST API for bulk exporting search profiles and manages the updating of the local SQL Server database.
The database is defined when running the sync process.
The sync process needs to be scheduled to run via a scheduler such as the Windows Task Scheduler.

 

Where more than 10,000 records are returned by the Search Profile, the export becomes a two-step process, since it utilises the Assetic background worker process to generate the export. For fewer than 10,000 records the export is immediate unless specified otherwise using one of the parameters available when initiating the export.

 

For search profiles that include a last modified date field, there is a process that allows the export to be skipped if the data hasn't changed (the actual field name for the last modified date differs between modules).

 
Note: The user will need to have "create temporary table" rights since the downloaded data is first dumped to a temporary table and then used in an SQL merge to apply the inserts and updates.

Quick Start

See 'Advanced Options' below for additional options.

1. Create management table "assetic_sync_manager"

The sync process utilises a table called "assetic_sync_manager". This table must be in the same database as the target table.

 

The table may be created via the following SQL:

CREATE TABLE assetic_sync_manager(
[oid] [uniqueidentifier] NULL,
[taskid] [nvarchar](50) NULL,
[profileid] [nvarchar](50) NULL,
[profilename] [nvarchar](100) NULL,
[tablename] [nvarchar](50) NULL,
[keyfield] [nvarchar](50) NULL,
[logtable] [nvarchar](50) NULL,
[documentid] [nvarchar](50) NULL,
[useinternalnames] [bit] NULL,
[replacespaces] [bit] NULL,
[spacedelimiter] [nvarchar](50) NULL,
[setemptyasnull] [bit] NULL,
[allowdelete] [bit] NULL,
[date_initiated] [datetime] NULL,
[date_last_modified] [datetime] NULL,
[status] [nvarchar](50) NULL,
[replace_special] [nvarchar](500) NULL,
[truncation_indicator] [nvarchar](50) NULL,
[total_records] [int] NULL,
[processed_records] [int] NULL )

Note: [replacespaces] introduced in SDK release 2016.12.1.2. A check is made for this field and if missing it is added.
 
Note: [spacedelimiter] introduced in SDK release 2017.3.1.6. A check is made for this field and if missing it is added.
 
Note: [setemptyasnull] introduced in SDK release 2018.11.1.2. A check is made for this field and if missing it is added.
 
Note: [allowdelete] introduced in SDK release 2018.26.1.0. A check is made for this field and if missing it is added.
 
Note: [replace_special] and [truncation_indicator] introduced in SDK release 2019.13.2.3. A check is made for these fields and if missing are added.
 
Note: [total_records] and [processed_records] introduced in SDK release 2021.4.1.0. A check is made for these fields and if they are missing, they are added.
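The "check and add" behaviour described in these notes can be sketched as follows. This is an illustration only, not the SDK's actual implementation; the expected column list is taken from the CREATE TABLE statement above:

```python
# Columns later SDK releases expect, with their SQL types
# (taken from the CREATE TABLE statement above)
EXPECTED = {
    "replacespaces": "bit",
    "spacedelimiter": "nvarchar(50)",
    "setemptyasnull": "bit",
    "allowdelete": "bit",
    "replace_special": "nvarchar(500)",
    "truncation_indicator": "nvarchar(50)",
    "total_records": "int",
    "processed_records": "int",
}

def missing_column_sql(existing_columns):
    """Return ALTER TABLE statements for expected columns not yet present."""
    return [
        "ALTER TABLE assetic_sync_manager ADD [{0}] {1} NULL".format(name, sqltype)
        for name, sqltype in EXPECTED.items()
        if name not in existing_columns
    ]

# Example: a table created before SDK release 2021.4.1.0
existing = {"oid", "taskid", "profileid", "replacespaces", "spacedelimiter",
            "setemptyasnull", "allowdelete", "replace_special",
            "truncation_indicator"}
for stmt in missing_column_sql(existing):
    print(stmt)
```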

2. Define sync parameters

The sync process requires information about the search profile(s).
This definition is provided via the Assetic Python SDK object "SearchProfileRepresentation". The object has the following fields:

Parameter Description
profileguid The GUID of the search profile used for the data export.
profilename The name of the search profile. Not actually required for the sync process, but provides the user-friendly name of the defined profile
database The name of the target database (not the database server instance. The server connection is defined when the script is run)
tablename The name of the target database table that the search data is synced to.
keyfield The name of the key field in the search data that uniquely identifies each record in the data. Note that this field should be encapsulated in square brackets '[]' if there are whitespaces in the field name in the target database table. Examples are '[asset id]' or '[work order id]'
logtable The name of the log table that logs inserts and updates to the target table each time the process is run
useinternalnames Boolean (True/False). If True, the Assetic internal field names are used for the database column names, rather than the user-friendly labels in the search result headers.
replacespaces Boolean (True/False). If True and useinternalnames = False, the user-friendly labels in the search result headers are stripped of any whitespace and used for the database column names. This means the database field names have no spaces but use the user-friendly headers. (Added in SDK release 2016.12.1.2)
spacedelimiter A character (or characters) used to replace whitespace in the user-friendly labels. Use in conjunction with 'replacespaces=True'. Allows the original user-friendly labels to be reconstructed by substituting the spacedelimiter with whitespace. (Added in SDK release 2017.3.1.5)
set_empty_as_null Boolean (True/False). If 'True', when a field is inserted/updated, NULL is used where the field is an empty string. The default is 'False', which inserts/updates the field with an empty string.
allow_delete Boolean (True/False). If 'True', records that exist in the target database table but are not in the records returned from Assetic are deleted from the target database table. Deleted records are written to the log table and can be manually inserted back into the target database table if deleted in error. The default is 'False'.
replace_special_characters Some Assetic field labels contain characters such as brackets, dashes, or forward slashes. Although valid in an SQL Server column name, these characters may be a problem for applications that use the data. This setting allows the generated database column name to exclude these characters and optionally replace them with different character(s). Define a dictionary where each key is the character to replace and the value is the replacement character (or empty string). The default is to create the column name to match the label. Note that the 'replacespaces' and 'spacedelimiter' settings should still be used to strip whitespace from the column name.
truncation_indicator In some instances it may be desirable to truncate text data that is longer than the database column length. The truncation indicator is a string appended to the truncated data to indicate the record has been truncated, for example "...". The data is truncated to the length of the column, less the length of the truncation indicator string. An empty string may be used to indicate truncation is required with no suffix. The default is not to truncate the data, which will cause an error should any text data exceed the length of the database column.
background_export Boolean (True/False). The default is 'False'. Specifies whether an individual search profile will export via a background worker process. If set to 'True', this parameter can be used to bypass the 'export immediate' process for a search profile with fewer than 10,000 records and ensure it is instead exported via a background worker.

If there is a requirement to have all exports initiated via a background worker, the 'always_background_export' boolean parameter can instead be provided when initiating the 'dbsync_initiate()' method, as outlined below.
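To illustrate how the naming and truncation parameters combine, the following sketch (plain Python, an approximation of the documented behaviour rather than the SDK's own code) applies a 'replace_special_characters' dictionary and 'replacespaces'/'spacedelimiter' to a hypothetical label, and truncates a value using a 'truncation_indicator':

```python
def column_name_from_label(label, replacespaces=True, spacedelimiter="_",
                           replace_special=None):
    """Derive a database column name from a user-friendly search label."""
    name = label
    if replace_special:
        # Swap out problem characters first, as per replace_special_characters
        for char, replacement in replace_special.items():
            name = name.replace(char, replacement)
    if replacespaces:
        # Then replace whitespace with the spacedelimiter
        name = name.replace(" ", spacedelimiter)
    return name

def truncate_value(value, max_len, indicator="..."):
    """Truncate text to the column length, reserving room for the indicator."""
    if len(value) <= max_len:
        return value
    return value[:max_len - len(indicator)] + indicator

# Hypothetical label with brackets and a dash
specials = {"(": "_", ")": "_", "-": "_", "/": "_"}
print(column_name_from_label("Asset Criticality (1-5)", replace_special=specials))
print(truncate_value("A very long comment field value", 20))
```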

3. Initiate the sync

The sync process is initiated via the Assetic Python SDK method SyncToLocalProcesses.dbsync_initiate().
It has the following mandatory parameters:
dbserver: database instance (SQL Server)
dbname: database name. Assumes trusted connection
profiles: an array of SearchProfileRepresentation instances

 

There are also seven optional parameters:
incremental: A boolean to indicate whether there should be a full export of all data, or just an export of changes since the last export. This option is currently disabled; the sync always performs a full export (i.e. incremental = False)

 

username: SQL Server login username

 

password: SQL Server password

 

driver: SQL Server ODBC driver. The default is "ODBC Driver 13 for SQL Server" (Assumes this driver is installed)

 

custom_field_list: Use to set a database field name that cannot be generated by the default field name options. Refer to 'Advanced Options' for further detail.

 

force_export: Force the export to be initialised even if there have been no changes to the records. Useful if the number of fields in the search profile has changed. Not relevant if the search profile doesn't contain a 'last modified' field.

 

always_background_export: The default is "False". If "True" then all exported profiles will initiate via an Assetic background worker process. This can be used to ensure that all search profile exports will have a corresponding export file, including search profiles with less than 10,000 records that would otherwise be created immediately.

 

The following example creates two separate export jobs for two different search profiles:

 

"""
Assetic.BulkExportInitialise.py
Initialises the bulk export process.
Define a set of search profiles to be exported, and the corresponding db tables
to write the data to, and log changes to.
When this py script is run it uses the Assetic 'export all' API to initiate the
export, which uses the Assetic background worker to queue and execute the export.
A second 'finalise' step run via a separate py script checks to see if the
export is complete, and if so downloads the exported data.
Assumes the user running the script can establish a connection with the
database.
Full documentation: https://assetic.zendesk.com/hc/en-us/articles/236156708-Bulk-Export-Process-Assetic-Python-SDK-
Author: Kevin Wilton (Assetic)
"""
import assetic
# Define location of configuration file, log file, log level
asseticsdk = assetic.AsseticSDK("C:/users/myself/assetic.ini", None, "Error")
sync = assetic.SyncToLocalProcesses()
# Define an empty list of profiles
profiles = []
# Profile for 'Assets Sync'
profilemapping = assetic.SearchProfileRepresentation()
profilemapping.profileguid = "d9a350d7-8078-e611-946c-06edd62954d7"
profilemapping.profilename = "Water Pressure Pipes"
profilemapping.tablename = "waterpressurepipes"
profilemapping.keyfield = "[Asset Id]"
profilemapping.logtable = "waterpressurepipes_log"
profilemapping.useinternalnames = False
# Use field labels which will have whitespaces, so remove the whitespaces
profilemapping.replacespaces = True
profilemapping.spacedelimiter = "_"  # Replace whitespace with "_"
# If the data is an empty string set it as NULL in the database
profilemapping.set_empty_as_null = True
# If a record is no longer in the Assetic data then remove it from the DB
profilemapping.allow_delete = True
# The column name may have characters such as "/". Replace with a "_"
profilemapping.replace_special_characters = {
    "(": "_",
    ")": "_",
    "-": "_",
    "/": "_"
}
# Rather than ensuring the target columns are long enough, truncate the data
profilemapping.truncation_indicator = "..."
# If a profile has fewer than 10,000 records, force it to export via background worker
profilemapping.background_export = True
# Add profile definition to profiles list
profiles.append(profilemapping)
# Profile for 'Component Sync'
profilemapping = assetic.SearchProfileRepresentation()
profilemapping.profileguid = "65ffd995-4e76-e611-946c-06edd62954d7"
profilemapping.profilename = "All Components"
profilemapping.tablename = "allcomponents"
profilemapping.keyfield = "[Component Id]"
profilemapping.logtable = "allcomponents_log"
profilemapping.useinternalnames = True
profilemapping.background_export = False
profiles.append(profilemapping)
# Initialise the export.
sync.dbsync_initiate('dbserver', 'asseticsync', profiles,
                     force_export=False, always_background_export=False)

This process may be scheduled to run on a periodic basis, such as weekly

4.  Finalise sync

The Assetic Python SDK method SyncToLocalProcesses.dbsync_finalise() is used to check the status of export tasks, and if the export task is complete, the CSV file data is downloaded and applied to the target table in the database. The database table assetic_sync_manager is used to get a list of all tasks that are flagged as 'In Process'.

 

The "dbsync_finalise" method is also run as a scheduled process (at a later time than SyncToLocalProcesses.dbsync_initiate), as there is no "notification service" to trigger the finalisation.

 

SyncToLocalProcesses.dbsync_finalise has the following mandatory parameters:

 

dbserver: database instance (SQL Server)

 

dbname: database name. Assumes trusted connection

 

There are also 4 optional parameters:
username: SQL Server login username

 

password: SQL Server password

 

driver: SQL Server ODBC driver. The default is "ODBC Driver 13 for SQL Server" (Assumes this driver is installed)

 

custom_field_list: Use to set a database field name that cannot be generated by the default field name options. Refer to 'Advanced Options' for further detail.

 

In the example below it is assumed the user is able to make a trusted connection to the database, and that the driver "ODBC Driver 13 for SQL Server" is installed.

 

"""
Assetic.BulkExportFinalise.py
Finalises the bulk export process.
When this py script is run it checks to see if there are any outstanding bulk
exports awaiting download. This is indicated by the database table
assetic_sync_manager where the status of an export is 'In Progress'.
The Assetic APIs are used to check if the export is ready for download and if
so, the data is downloaded and synced with the current data.
The 'initialise' process defines the database table to sync to.
Full documentation: https://assetic.zendesk.com/hc/en-us/articles/236156708-Bulk-Export-Process-Assetic-Python-SDK-
Author: Kevin Wilton (Assetic)
"""
import assetic
# Set logging to "Info" level
asseticsdk = assetic.AsseticSDK("C:/Users/myself/assetic.ini", None, "Info")
sync = assetic.SyncToLocalProcesses()
# Using trusted connection
response = sync.dbsync_finalise("dbserver", "asseticsync")

As with the script to initiate the process, if the driver "ODBC Driver 13 for SQL Server" is not installed, an alternative driver may be specified. Also, if an SQL Server username and password login is to be used instead of a trusted connection, this may be specified. In the example below the driver "ODBC Driver 11 for SQL Server" is specified along with a username and password.

sync.dbsync_finalise('dbserver', 'asseticsync', username='fred', password='mypwd',
                     driver="ODBC Driver 11 for SQL Server")

How it works

The process uses the Assetic Python SDK. It is first loaded and authenticated via the ini file (first parameter). The second parameter is the name of the log file. The third parameter is the logging level, one of 'debug', 'warning' or 'info'.

import assetic
# Initialise Assetic Python SDK. Set logging mode to 'info'
asseticsdk = assetic.AsseticSDK('C:/Users/myself/assetic.ini', None, 'info')

An instance of the Assetic SDK "SyncToLocalProcesses" library is then initiated

sync = assetic.SyncToLocalProcesses()

The search profiles to be exported are defined as an array:

# Define an empty list of profiles
profiles = []

For each search profile, the export parameters are defined

# Create a profile instance and set values
profilemapping = assetic.SearchProfileRepresentation()
profilemapping.profileguid = "d9a350d7-8078-e611-946c-06edd62954d7"
profilemapping.profilename = "Water Pressure Pipes"
profilemapping.tablename = "waterpressurepipes"
profilemapping.keyfield = "[Asset Id]"
profilemapping.logtable = "waterpressurepipes_log"
profilemapping.useinternalnames = False
# Use field labels which will have whitespaces, so remove the whitespaces
profilemapping.replacespaces = True
profilemapping.spacedelimiter = "_"  # Replace whitespace with "_"
# If the data is an empty string set it as NULL in the database
profilemapping.set_empty_as_null = True
# If a record is no longer in the Assetic data then remove it from the DB
profilemapping.allow_delete = True
# The column name may have characters such as "/". Replace with a "_"
profilemapping.replace_special_characters = {
    "(": "_",
    ")": "_",
    "-": "_",
    "/": "_"
}
# Rather than ensuring the target columns are long enough, truncate the data
profilemapping.truncation_indicator = "..."
# If a profile has fewer than 10,000 records, force it to export via background worker
profilemapping.background_export = True

The profile is then added to the array

# Add profile definition to profiles list
profiles.append(profilemapping)

The following code initiates the export task with the parameters defined in the table below

Parameter Description
dbserver Name of the database server instance
dbname Name of the database
profiles An array of assetic.SearchProfileRepresentation instances that define the profiles to export and the tables to export to
incremental Placeholder parameter, currently not implemented. Intended to support incremental syncing to export only those records that have changed since the last successful sync. Will use the search API (10,000 record limit) rather than the search export API. This requires less data to be downloaded and does not require the background worker process
force_export Initiate the export even if no records have changed.  The default is 'False', set to 'True' to force the export.
always_background_export The default is "False". If "True" then all profiles will export via an Assetic background worker process. This can be used to ensure that all search profile exports will have a corresponding export file, including search profiles with less than 10,000 records that would otherwise be created immediately.
# Initialise the export. 
sync.dbsync_initiate('dbserver', 'asseticsync', profiles, force_export=False, always_background_export=False)

dbsync_initiate uses the Assetic REST API endpoint POST /api/v2/search/{id}/export to create a new export task. The ID of the task is recorded in the table assetic_sync_manager.

 

Since the background worker process queues jobs, the export may not be created immediately.

 
Note: This is the same process as the 'Export All' button in the Assetic application, and the status of the export initiated via the API may also be viewed in the Assetic application as shown below.

Export skipped if no changes

The export process will be skipped if:

  • The target database table already contains records (a comparison is not made of the number of records in the database compared to the search)
  • and the number of records returned by the search hasn't changed since the last export
  • and the search profile contains a 'last modified date' field and there are no records with a last modified date more recent than the DateTime the export was last started (the start time of the background process rather than the time that the process was added to the queue).
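The skip conditions above can be expressed as a small decision function. This is an illustrative sketch of the documented rules, not the SDK's internal logic:

```python
from datetime import datetime

def should_skip_export(table_has_records, record_count, last_record_count,
                       last_modified_dates, last_export_started):
    """Return True when the documented skip conditions all hold."""
    if not table_has_records:
        return False  # empty target table: always export
    if record_count != last_record_count:
        return False  # the search now returns a different number of records
    if last_modified_dates is None:
        return False  # no 'last modified date' field in the profile
    # Skip only if nothing was modified after the last export start time
    return all(d <= last_export_started for d in last_modified_dates)

last_run = datetime(2023, 5, 1, 2, 0)
dates = [datetime(2023, 4, 28), datetime(2023, 4, 30)]
print(should_skip_export(True, 120, 120, dates, last_run))   # nothing changed
print(should_skip_export(True, 120, 121, dates, last_run))   # record count changed
```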

 

The name of the 'last modified date' field differs between search modules as indicated by the table below. The process also relies on the key field being set according to the module, so that the correct name for the date field can be applied.

Module Key Field Field Label Field Name
Assets ComplexAssetId Asset Last Modified ComplexAssetLastModified
Component Index ComponentId Component Last Modified ComponentLastModified
Work Order WorkOrderId Work Order Last Modified Date WOLastModified
Work Request WorkRequestId Work Request Last Modified Date WRLastModified
Assessment Form FormResultId Form Result Last Modified Date FormResultLastModified
Component - Network Measure FriendlyNetworkMeasureRecordId Last Modified Date LastModifiedOn

Finalise Export

The sync "dbsync_finalise" process is run after the "dbsync_initiate" process, allowing time in between for the background worker to process the export. The "dbsync_finalise" process may be run several times since it only processes completed exports that are flagged as outstanding.

 

Using the Assetic REST API endpoint GET /api/v2/backgroundworker/{id}, the taskid recorded when the export was initiated is used to check the status of the export task. If the task is complete, the response also returns the document GUID of the CSV file.

 

The Assetic REST API endpoint GET /api/v2/document/{id}/file is used by "dbsync_finalise" to get the CSV document. It does not, however, need to save the file to disk, since the data is read into memory as part of the document GET.

 

"dbsync_finalise" is initiated by first importing the Assetic Python SDK and authenticating via the ini file (first parameter). The second parameter is the name of the log file. The third parameter is the logging level, one of 'debug', 'warning' or 'info'.

import assetic
# Initialise Assetic Python SDK. Set logging mode to 'info'
asseticsdk = assetic.AsseticSDK('C:/Users/me/assetic.ini', 'c:/logs/dbsync.log', 'info')

An instance of the Assetic SDK "SyncToLocalProcesses" library is then initiated

sync = assetic.SyncToLocalProcesses()

The sync finalise process is then run. It only requires the database server and database name since the remaining information required is held in the table assetic_sync_manager

# Execute sync finalise
sync.dbsync_finalise('dbserver', 'asseticsync')

The downloaded CSV file data is merged into the target database table. This relies on a "key field" being correctly defined:

  • New records are appended to the target database table. A new record is a record that doesn't have a corresponding key in the target database table.
  • Edited records are updated in the target table. Edits are determined by comparing each field in the record downloaded from Assetic with the corresponding record in the target database table.
  • Deleted records are optionally removed from the target database table. A deleted record is determined on the basis that it is in the target database table but not in the records downloaded from Assetic. Deleting is optional because an alternative approach is to include in the export the field that records the status of the record in Assetic. That way the status can be used within SQL queries to either include or exclude deleted records.
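The merge behaviour can be sketched in plain Python, with dictionaries standing in for the database tables (the SDK itself stages the downloaded data in a temporary table and applies an SQL MERGE):

```python
from datetime import datetime

def merge_records(target, downloaded, allow_delete=False):
    """Merge downloaded rows (keyed dicts) into target, returning log entries."""
    log = []
    now = datetime.now()
    for key, row in downloaded.items():
        if key not in target:
            target[key] = row
            log.append(("INSERTED", now, key))
        elif target[key] != row:  # any field differs
            log.append(("UPDATED", now, key, target[key]))  # values prior to update
            target[key] = row
    if allow_delete:
        for key in [k for k in target if k not in downloaded]:
            log.append(("DELETED", now, key, target[key]))  # values prior to delete
            del target[key]
    return log

target = {"A1": {"status": "Active"}, "A2": {"status": "Active"}}
downloaded = {"A1": {"status": "Disposed"}, "A3": {"status": "Active"}}
log = merge_records(target, downloaded, allow_delete=True)
print([entry[0] for entry in log])
```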

 

For each of the actions above, the "log" table is appended to. The log table typically has the same structure as the target table, plus additional columns for the edit action and sync date. If the log table does not exist it will be created to match the target table.

  • For a new record, the log table will just be populated with the edit action "INSERTED", the timestamp, and the key field value for the new record.
  • For an existing record, the log table will be populated with the edit action "UPDATED", the timestamp, plus the values of the record prior to the update.
  • For a deleted record, the log table will be populated with the edit action "DELETED", the timestamp, plus the values of the record prior to the delete.

 

The database table assetic_sync_manager is then updated with the status 'Complete', and the 'date_last_modified' field is updated with the current DateTime.

 

Advanced Options

Target Table

The target table is the database table that the data from the search export is written to. The target table is built automatically if it does not already exist. The column names in the table are the field names from the search profile. If the SearchProfileRepresentation property 'useinternalnames' is True then the column names are based on the Assetic internal field names; otherwise the column names are based on the user-friendly search field names. The target table is built using the metadata for the search profile, via the Assetic REST endpoint GET /api/v2/search/{id}/metadata. The response from this search includes:

Value Description
label The name of each field as presented in the search and in the header row of the export file
name The internal Assetic field name which is abbreviated and does not contain whitespaces in the name
type The data type of each field is based on the "Type" property in the metadata. Note that it does not contain field length, so 'string' data is treated as nvarchar(500)

 

If required, the Assetic Python SDK method SyncToLocalProcesses.build_table_from_search may be used with the optional parameter printonly=True to print the SQL that will be used to create the table. The SQL can then be manually edited to more closely match data types and field lengths. The SQL is then applied to create the table.

 

Sample script:

# Assetic.BulkExportTableCreation
# Create the target database table for a given search profile
import assetic
# Initialise Assetic Python SDK. Set logging mode to 'info'
asseticsdk = assetic.AsseticSDK('C:/Users/myself/assetic.ini', None, 'info')
sync = assetic.SyncToLocalProcesses()
# Database connection required.
# Database will be tested for existence of the target table and script will not
# be created if table exists
dbtools = assetic.DB_Tools('dbserver', 'assetic_sync')
# Define the profile that the table SQL create syntax is to be built for
profilemapping = assetic.SearchProfileRepresentation()
profilemapping.profileguid = "52e11423-dc76-e511-944d-06ed862954d5"
profilemapping.profilename = "Component Sync"
profilemapping.tablename = "expcomp"
profilemapping.keyfield = "[Component ID]"
profilemapping.logtable = "expcomp_log"
profilemapping.useinternalnames = False
profilemapping.replacespaces = True
profilemapping.spacedelimiter = "_"
profilemapping.set_empty_as_null = True
profilemapping.allow_delete = True
# Note that changing profilemapping.useinternalnames = True changes
# the SQL created to use Assetic internal field names for database column names
# Get column mappings and data types for search profile and build table
sync.build_table_from_search(dbtools, None, profilemapping, True)

 

Note: The target table does not need to contain all of the fields returned by the search; it may have a subset.
Note: The target table may also have fields not returned by the search.

Custom Field Names

A list of assetic.SyncProcessCustomFieldRepresentation instances may be specified via the optional parameter 'custom_field_list' when executing both SyncToLocalProcesses.dbsync_initiate() and SyncToLocalProcesses.dbsync_finalise().

 

For each search profile, the database field name can be changed from the default value to a custom value.  This setting will override any generated field name based on the parameters 'useinternalnames', 'replacespaces', and 'spacedelimiter'.

 

assetic.SyncProcessCustomFieldRepresentation has the following structure

Key Description

profileguid

The Search Profile GUID that the field name customisation applies to

fields

A dictionary where the keys are the default generated Assetic field names (after the 'useinternalnames', 'replacespaces', and 'spacedelimiter' settings have been applied), and the values are the custom field names to use. Not all fields in the profile need to be included in the dictionary, only those fields where the name is to be customised.

The sample below illustrates setting custom names for the fields "Asset Zone" and "OCI"

custom_field_list = []
# Set some custom field names
custom_fields = assetic.SyncProcessCustomFieldRepresentation()
custom_fields.profileguid = "e9a350d7-8078-e611-946c-06edd62954d6"
fields = dict()
fields["Asset_Zone"] = "TheAssetZone"
fields["OCI"] = "TheOC"
custom_fields.custom_field_names = fields
# Append the setting to the list
custom_field_list.append(custom_fields)

custom_field_list is then applied to the execution of the process (where the database server is 'localhost' and the database name is 'scratch'):

sync.dbsync_initiate("localhost", "scratch", profiles, custom_field_list=custom_field_list)

and:

sync.dbsync_finalise("localhost", "scratch", custom_field_list=custom_field_list)

It is necessary to apply these settings in both SyncToLocalProcesses.dbsync_initiate() and SyncToLocalProcesses.dbsync_finalise(), because when the search profile returns fewer than 10,000 records the export takes place immediately, using the settings in SyncToLocalProcesses.dbsync_initiate(). If the export occurs via the background worker process, the settings are applied when SyncToLocalProcesses.dbsync_finalise() is executed.

 

Alternatively, a value of 'True' can be provided for the parameter 'always_background_export' of the SyncToLocalProcesses.dbsync_initiate() to ensure that search profile exports always occur via a background worker process.

 

Log Table

The log table will be in the same database as the target table.

 

A record is added to this table for every new record inserted into the target database, and every existing record that is updated. When a record is updated, the values of the record prior to the update are copied to the log table. So the target table always contains the most current data, and the log table contains archived data.

 

The log table may be manually created by using the target table as a template and must also include the following additional fields:
[action] [nvarchar](10)
[syncdate] [datetime]

 
Note: If the log table contains fields not in the target table, these fields will be ignored. If the target table contains fields not in the log table, a warning will be issued in the log file, and the original value for the missing field will not be included in the log. This may be desired if the log table is only being used to record the key field and action.
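The column matching described in this note can be sketched as follows (a plain Python illustration of the documented behaviour, not the SDK's code): only columns present in both tables receive values, and a warning is raised for target columns missing from the log table:

```python
import warnings

def log_row_for(target_row, log_columns):
    """Build a log-table row from a target-table row, matching columns by name."""
    row = {}
    for column, value in target_row.items():
        if column in log_columns:
            row[column] = value
        else:
            # Target column missing from log table: warn and drop the value
            warnings.warn("Column '{0}' missing from log table; "
                          "value not logged".format(column))
    return row

# Hypothetical target row and log-table columns; the log table also has
# its extra [action] and [syncdate] columns but lacks 'zone'
target_row = {"asset_id": "A1", "status": "Active", "zone": "North"}
log_columns = {"asset_id", "status", "action", "syncdate"}
print(log_row_for(target_row, log_columns))
```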

 

assetic_sync_manager table

 

The assetic_sync_manager table is used to manage the sync process.

 

Fields:

Field Description Type
oid Unique identifier for each sync job guid
taskid Background worker process ID of the export task nvarchar(50)
profileid Advanced search profile ID nvarchar(50)
profilename A friendly description of the search; not used in the export process. Does not need to be the actual name of the search profile nvarchar(100)
tablename Name of the target database table nvarchar(50)
keyfield Name of the field in the target database that uniquely identifies each record. Must also be in search results. Must match the internal name if using internal names, else the export column header name nvarchar(50)
logtable The name of the database table to log insert/updates nvarchar(50)
documentid ID of the document generated by the export. It is set by the dbsync_finalise process when the export is detected as being complete nvarchar(50)
useinternalnames If False then column headers are used, else Assetic internal field names bit
replacespaces If useinternalnames is True then white spaces in user-friendly field names may be removed, or replaced with the value defined by 'spacedelimiter' bit
spacedelimiter The delimiter that is used to replace the white spaces in the user-friendly field names. Can be NULL nvarchar(50)
setemptyasnull If TRUE, when a field is inserted/updated use NULL where the field is an empty string. The default is FALSE which is to insert/update the field with an empty string bit
allowdelete If TRUE, records that exist in the target database table are deleted from the target database table if not in the records returned from Assetic. Deleted records are written to the log table and can be manually inserted back into the target database table if deleted in error. The default is FALSE. bit
date_initiated Timestamp of when the export was initiated datetime
date_last_modified Timestamp of when the last update to the record was made datetime
status One of the following values nvarchar(50)

  • Completed - the sync has finished. No more action will be taken
  • In Progress - the sync has been started but not completed. The next time dbsync_finalise is executed, this task will be actioned
  • Error - an error occurred in the process. No more action will be taken. The status can be set back to 'In Progress' and dbsync_finalise re-run to check the error logs

total_records Total number of records returned by the search profile int

Total number of records exported. Currently the same as total_records, but in future this may reflect the number of records exported by an incremental export int

When a process is initiated via SyncToLocalProcesses.dbsync_initiate the task id of the process is written to the field 'taskid'. A unique identifier is also generated and defined for each task in the field 'oid'.

 

The timestamp of when the task was initiated is recorded in the field 'date_initiated'.

 

The Assetic Python SDK method SyncToLocalProcesses.build_assetic_sync_manager_table may be used to create the table. It requires the following parameters:

  • dbserver: the name of the SQL Server instance
  • dbname: the name of the SQL Server database

It assumes the user is able to make a trusted connection to the database.
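A minimal sketch of calling this method is below. The method name and its two parameters are documented above; the import path, instance name, and database name are assumptions.

```python
# Sketch: creating the assetic_sync_manager table via the SDK method
# described above. The import path is an assumption; the server and
# database names are illustrative placeholders.
DBSERVER = r"MYSERVER\SQLEXPRESS"  # SQL Server instance (illustrative)
DBNAME = "assetic_local"           # database holding the sync tables (illustrative)

def create_sync_manager_table():
    from assetic import SyncToLocalProcesses  # import path is an assumption
    sync = SyncToLocalProcesses()
    # Relies on a trusted (Windows) connection, per the note above
    sync.build_assetic_sync_manager_table(DBSERVER, DBNAME)
```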

Search Profile Export

The Assetic REST API endpoint POST /api/v2/search/{id}/export initiates an export of the search results for the given search profile guid "{id}". No search filters can be set via the API, so the search profile must already have search filters defined within the profile.

 

This API provides the same functionality as the 'Export All' button seen in the results grid of Advanced Search profiles within the Assetic application.
The POST response is the guid of the export task. This guid is used to retrieve the status and document ID of the export task via the Assetic REST API endpoint GET /api/v2/backgroundworker/{id}.

 

Sample request:

https://abc.assetic.net/api/v2/search/b856b2e6-ffbc-e611-946c-06edd62954d7/export
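Calling this endpoint from Python might look like the sketch below. Bearer-token authentication is an assumption here; substitute whatever auth scheme your Assetic site uses.

```python
# Sketch: initiating a search profile export via the REST API.
# The endpoint path is documented above; the auth header is an assumption.
BASE = "https://abc.assetic.net"

def export_url(base, profile_guid):
    # POST to this URL starts the export for the given search profile
    return "{0}/api/v2/search/{1}/export".format(base, profile_guid)

def initiate_export(profile_guid, token):
    import requests  # third-party HTTP client, assumed available
    resp = requests.post(
        export_url(BASE, profile_guid),
        headers={"Authorization": "Bearer {0}".format(token)},
    )
    resp.raise_for_status()
    # The response body is the guid of the background worker export task
    return resp.json()
```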

Background Worker API

The Assetic REST API endpoint GET /api/v2/backgroundworker/{id} is used to get the status of the generation of a search profile export file. The "{id}" is the guid of the background worker task, which is typically obtained when the export task is generated via the Assetic REST API endpoint POST /api/v2/search/{id}/export.

 

The following statuses may be returned via the 'Status' property in the response:

Status Description
Completed The file has been created and is ready for download. The 'DocumentId' property in the response will have the guid of the generated document. The Assetic REST API endpoint GET /api/v2/document/{id}/file is used to get the file
InProgress The export file has not been created. The 'DocumentId' property in the response will have no value
InQueue The export process is queued and has not started
Error There was an error with initiating or running the export

 

The status of the export may also be viewed by navigating to the search profile in the Assetic application and selecting the 'Export' tab. This will show the history and status of all exports.

 

Sample request:

https://abc.assetic.net/api/v2/backgroundworker/b616b2e6-ffbc-e611-946c-0667d62954d7

Sample response:

{ "Status": "Completed", "DocumentId": "b816b5e6-ffbc-e611-945c-06edd62954d7" }
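Putting the statuses above together, a polling loop might look like the sketch below. The 'Status' and 'DocumentId' properties are documented above; the auth header and poll interval are assumptions.

```python
# Sketch: polling the background worker endpoint until the export file
# is ready. Status values and properties are as documented above;
# the bearer-token auth header is an assumption.
import time

def parse_status(payload):
    # Returns (finished, document_id); DocumentId is only populated
    # once Status is 'Completed'.
    status = payload.get("Status")
    if status == "Completed":
        return True, payload.get("DocumentId")
    if status == "Error":
        raise RuntimeError("Export failed: {0}".format(payload))
    return False, None  # 'InProgress' or 'InQueue'

def wait_for_export(base, task_guid, token, poll_seconds=30):
    import requests  # third-party HTTP client, assumed available
    url = "{0}/api/v2/backgroundworker/{1}".format(base, task_guid)
    while True:
        resp = requests.get(url, headers={"Authorization": "Bearer {0}".format(token)})
        resp.raise_for_status()
        done, doc_id = parse_status(resp.json())
        if done:
            # The file itself is then fetched via GET /api/v2/document/{id}/file
            return doc_id
        time.sleep(poll_seconds)
```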

Search Profile column mappings
The Assetic REST API endpoint GET /api/v2/search/{id}/metadata is used to get the following information about the given search profile guid "{id}".
Label: The name of each field as presented in the search.
Name: The internal Assetic field name, which is abbreviated and contains no whitespace.
Type: The datatype of the field. Typical values include:

  • "System.String"
  • "System.Nullable`1[System.Double]"
  • "System.Nullable`1[System.DateTime]"
  • "System.Nullable`1[System.Decimal]"
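The metadata endpoint can be used to translate user-friendly column headers into internal field names, for example when populating the 'keyfield' or deciding on 'useinternalnames'. The sketch below assumes the response is a list of objects with 'Label', 'Name' and 'Type' properties, and bearer-token auth.

```python
# Sketch: fetching search profile metadata and building a Label -> Name
# mapping. The response shape (a list of objects with 'Label', 'Name'
# and 'Type') and the auth header are assumptions.

def label_to_name(metadata):
    # Map each friendly column header to its internal Assetic field name
    return {field["Label"]: field["Name"] for field in metadata}

def get_profile_metadata(base, profile_guid, token):
    import requests  # third-party HTTP client, assumed available
    url = "{0}/api/v2/search/{1}/metadata".format(base, profile_guid)
    resp = requests.get(url, headers={"Authorization": "Bearer {0}".format(token)})
    resp.raise_for_status()
    return resp.json()
```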